Suppr 超能文献




Space dynamic target tracking method based on five-frame difference and Deepsort.

Authors

Huang Cheng, Zeng Quanli, Xiong Fangyu, Xu Jiazhong

Affiliation

Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, 150080, China.

Publication

Sci Rep. 2024 Mar 12;14(1):6020. doi: 10.1038/s41598-024-56623-z.

DOI: 10.1038/s41598-024-56623-z
PMID: 38472374
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10933448/
Abstract

To address space dynamic target tracking under occlusion, this paper proposes an online tracking method that combines the five-frame difference with DeepSort (Simple Online and Realtime Tracking with a Deep Association Metric), identifying dynamic targets first and then tracking them. First, the five-frame difference is obtained by improving the three-frame difference, and its fusion with ViBe (Visual Background Extraction) enhances accuracy and resistance to interference. Second, YOLOv5s (You Only Look Once) is improved with DWT (Discrete Wavelet Transform) preprocessing and an injected GAM (Global Attention Module); serving as the detector for DeepSort, it reduces missed targets under occlusion and strengthens real-time performance and accuracy. Finally, simulation results show that the proposed method stably tracks all dynamic targets under background interference and occlusion, improving tracking precision to 93.88%. Furthermore, combined with a physical D435i depth camera, experiments on target dynamics demonstrate the effectiveness and superiority of the proposed recognition and tracking algorithm under strong light and occlusion.
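The frame-differencing idea the abstract builds on can be sketched in a few lines. The following is a minimal NumPy illustration of a generic five-frame difference (thresholded absolute differences between the middle frame and its four neighbours, combined by logical operations); it is an assumption-laden sketch of the general technique, not the paper's exact formulation, and it omits the ViBe fusion step:

```python
import numpy as np

def five_frame_difference(frames, thresh=25):
    """Motion mask from five consecutive grayscale frames.

    Generic sketch: a pixel is foreground if it differs from both
    past neighbours or from both future neighbours of the middle
    frame. `thresh` is an illustrative intensity threshold.
    """
    f1, f2, f3, f4, f5 = [f.astype(np.int16) for f in frames]
    d_past = (np.abs(f3 - f1) > thresh) & (np.abs(f3 - f2) > thresh)
    d_future = (np.abs(f4 - f3) > thresh) & (np.abs(f5 - f3) > thresh)
    return (d_past | d_future).astype(np.uint8)
```

On a synthetic sequence with a bright square moving over a static background, the mask is nonzero around the moving object and zero elsewhere; a background model such as ViBe would then be fused in to suppress residual noise.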

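DeepSort's "deep association metric" matches existing tracks to new detections by appearance similarity. The sketch below shows only the appearance half of that idea, under simplifying assumptions (cosine distance over L2-normalised feature embeddings, Hungarian assignment via SciPy, and a hypothetical `max_cos_dist` gate); the real tracker additionally fuses a Kalman-filter motion distance:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_feats, det_feats, max_cos_dist=0.4):
    """Appearance-only track/detection association, DeepSort-style sketch.

    Cost is cosine distance between L2-normalised feature vectors;
    the Hungarian algorithm finds the minimum-cost assignment, and
    pairs whose cost exceeds the gate are rejected.
    """
    t = track_feats / np.linalg.norm(track_feats, axis=1, keepdims=True)
    d = det_feats / np.linalg.norm(det_feats, axis=1, keepdims=True)
    cost = 1.0 - t @ d.T  # cosine distance matrix, shape (tracks, detections)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cos_dist]
```

Detections left unmatched after gating would spawn new tracks, and tracks unmatched for several frames would be deleted; this per-frame re-association is what lets the tracker recover identities after short occlusions.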

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/d70efdfe122c/41598_2024_56623_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/8da7414ff9e5/41598_2024_56623_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/8e03616d48be/41598_2024_56623_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/f20295966f0b/41598_2024_56623_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/f4e993c5902c/41598_2024_56623_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/1ee172bf9daa/41598_2024_56623_Fig6_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/96ea95358286/41598_2024_56623_Fig7_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/4630590429e3/41598_2024_56623_Fig8_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/7c905b5cfcf1/41598_2024_56623_Fig9_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/9f5e767095f5/41598_2024_56623_Fig10_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/ac5006c58717/41598_2024_56623_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/8e737541d007/41598_2024_56623_Fig12_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/93efcb91e572/41598_2024_56623_Fig13_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/cf1fc943dd1e/41598_2024_56623_Fig14_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/a73a11326a64/41598_2024_56623_Fig15_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/77b6fdaa79e7/41598_2024_56623_Fig16_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/bac5c69764ea/41598_2024_56623_Fig17_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/3624e65d614d/41598_2024_56623_Fig18_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/58ea03ed64b3/41598_2024_56623_Fig19_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/8591cbbb1f78/41598_2024_56623_Fig20_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dda7/10933448/0b512e71877b/41598_2024_56623_Fig21_HTML.jpg

Similar Articles

1
Space dynamic target tracking method based on five-frame difference and Deepsort.
Sci Rep. 2024 Mar 12;14(1):6020. doi: 10.1038/s41598-024-56623-z.
2
Multi-objective pedestrian tracking method based on YOLOv8 and improved DeepSORT.
Math Biosci Eng. 2024 Jan 3;21(2):1791-1805. doi: 10.3934/mbe.2024077.
3
Green pepper fruits counting based on improved DeepSort and optimized Yolov5s.
Front Plant Sci. 2024 Jul 16;15:1417682. doi: 10.3389/fpls.2024.1417682. eCollection 2024.
4
Benchmarking YOLOv5 and YOLOv7 models with DeepSORT for droplet tracking applications.
Eur Phys J E Soft Matter. 2023 May 8;46(5):32. doi: 10.1140/epje/s10189-023-00290-x.
5
Research on the Method of Counting Wheat Ears via Video Based on Improved YOLOv7 and DeepSort.
Sensors (Basel). 2023 May 18;23(10):4880. doi: 10.3390/s23104880.
6
Achieving Adaptive Visual Multi-Object Tracking with Unscented Kalman Filter.
Sensors (Basel). 2022 Nov 23;22(23):9106. doi: 10.3390/s22239106.
7
COVID-19 risk reduce based YOLOv4-P6-FaceMask detector and DeepSORT tracker.
Multimed Tools Appl. 2023;82(15):23569-23593. doi: 10.1007/s11042-022-14251-7. Epub 2022 Nov 25.
8
Improved DeepSORT-Based Object Tracking in Foggy Weather for AVs Using Sematic Labels and Fused Appearance Feature Network.
Sensors (Basel). 2024 Jul 19;24(14):4692. doi: 10.3390/s24144692.
9
Helmet-Wearing Tracking Detection Based on StrongSORT.
Sensors (Basel). 2023 Feb 3;23(3):1682. doi: 10.3390/s23031682.
10
Research on the Recognition and Tracking of Group-Housed Pigs' Posture Based on Edge Computing.
Sensors (Basel). 2023 Nov 3;23(21):8952. doi: 10.3390/s23218952.

Cited By

1
Port terminal mobile recognition based on combined YOLOv5s-DeepSort.
PLoS One. 2025 Jul 10;20(7):e0326376. doi: 10.1371/journal.pone.0326376. eCollection 2025.

References

1
Efficient Online Object Tracking Scheme for Challenging Scenarios.
Sensors (Basel). 2021 Dec 20;21(24):8481. doi: 10.3390/s21248481.
2
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.