
Deep Efficient Data Association for Multi-Object Tracking: Augmented with SSIM-Based Ambiguity Elimination.

Author Information

Prasannakumar Aswathy, Mishra Deepak

Affiliation

Department of Avionics, Indian Institute of Space Science and Technology, Trivandrum 695547, Kerala, India.

Publication Information

J Imaging. 2024 Jul 16;10(7):171. doi: 10.3390/jimaging10070171.

DOI: 10.3390/jimaging10070171
PMID: 39057742
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11277565/
Abstract

Deep learning-based methods have recently been harnessed to address the multiple object tracking (MOT) problem. The tracking-by-detection approach to MOT involves two primary steps: object detection and data association. In the first step, objects of interest are detected in each frame of a video. The second step establishes the correspondence between these detected objects across frames to track their trajectories. This paper proposes an efficient and unified data association method that utilizes a deep feature association network (deepFAN) to learn the associations. Additionally, the Structural Similarity Index Metric (SSIM) is employed to resolve uncertainties in the data association, complementing the deep feature association network. These combined association computations effectively link the current detections with the previous tracks, enhancing overall tracking performance. To evaluate the efficiency of the proposed MOT framework, we conducted a comprehensive analysis on popular MOT datasets, including the MOT Challenge and UA-DETRAC. The results showed that our technique performed substantially better than current state-of-the-art methods on standard MOT metrics.

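The association step described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' deepFAN: a single-window SSIM score between track and detection image patches stands in for the appearance cue, an optional `feat_affinity` matrix is a hypothetical placeholder for the learned deep-feature scores, and the Hungarian algorithm links detections to tracks.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Standard SSIM stabilizing constants for an 8-bit intensity range.
C1, C2 = (0.01 * 255) ** 2, (0.03 * 255) ** 2

def global_ssim(x, y):
    """Single-window SSIM over two equally sized grayscale patches.
    (A sliding-window SSIM, as in the original metric, would be closer
    to common practice; one global window keeps this sketch short.)"""
    mu_x, mu_y = x.mean(), y.mean()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov + C2)
    den = (mu_x ** 2 + mu_y ** 2 + C1) * (x.var() + y.var() + C2)
    return num / den

def associate(track_patches, det_patches, feat_affinity=None, thresh=0.3):
    """Link detections to tracks by maximizing total affinity.
    `feat_affinity` is a hypothetical stand-in for learned deepFAN
    scores; when given, it is averaged with the SSIM scores."""
    ssim = np.array([[global_ssim(t, d) for d in det_patches]
                     for t in track_patches])
    aff = ssim if feat_affinity is None else 0.5 * (ssim + feat_affinity)
    rows, cols = linear_sum_assignment(-aff)  # Hungarian step, maximizing
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if aff[r, c] >= thresh]           # drop weak, ambiguous links
```

In a full tracker, unmatched detections would start new tracks and unmatched tracks would age out; the 0.3 threshold and the 50/50 score blend here are illustrative choices, not values from the paper.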

Figures (g001–g007):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54a1/11277565/2f7b9cb59b1d/jimaging-10-00171-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54a1/11277565/88f46ea494a2/jimaging-10-00171-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54a1/11277565/8f5466c3c8be/jimaging-10-00171-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54a1/11277565/07be57b93623/jimaging-10-00171-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54a1/11277565/34c13b7949bd/jimaging-10-00171-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54a1/11277565/133dbd325fe1/jimaging-10-00171-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/54a1/11277565/adca04c5cc67/jimaging-10-00171-g007.jpg

Similar Articles

1. Deep Efficient Data Association for Multi-Object Tracking: Augmented with SSIM-Based Ambiguity Elimination.
   J Imaging. 2024 Jul 16;10(7):171. doi: 10.3390/jimaging10070171.
2. Deep Affinity Network for Multiple Object Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jan;43(1):104-119. doi: 10.1109/TPAMI.2019.2929520. Epub 2020 Dec 4.
3. Pixel-Guided Association for Multi-Object Tracking.
   Sensors (Basel). 2022 Nov 18;22(22):8922. doi: 10.3390/s22228922.
4. Multiple Traffic Target Tracking with Spatial-Temporal Affinity Network.
   Comput Intell Neurosci. 2022 May 23;2022:9693767. doi: 10.1155/2022/9693767. eCollection 2022.
5. A Two-Stage Data Association Approach for 3D Multi-Object Tracking.
   Sensors (Basel). 2021 Apr 21;21(9):2894. doi: 10.3390/s21092894.
6. Relation3DMOT: Exploiting Deep Affinity for 3D Multi-Object Tracking from View Aggregation.
   Sensors (Basel). 2021 Mar 17;21(6):2113. doi: 10.3390/s21062113.
7. MSA-MOT: Multi-Stage Association for 3D Multimodality Multi-Object Tracking.
   Sensors (Basel). 2022 Nov 9;22(22):8650. doi: 10.3390/s22228650.
8. Improved STNNet, a Benchmark for Detection, Tracking, and Counting Crowds Using Drones.
   MethodsX. 2024 Jun 25;13:102820. doi: 10.1016/j.mex.2024.102820. eCollection 2024 Dec.
9. Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2018 Mar;40(3):595-610. doi: 10.1109/TPAMI.2017.2691769. Epub 2017 Apr 6.
10. Effective Multi-Object Tracking via Global Object Models and Object Constraint Learning.
   Sensors (Basel). 2022 Oct 18;22(20):7943. doi: 10.3390/s22207943.

References Cited in This Article

1. Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets.
   IEEE Trans Image Process. 2021;30:1439-1452. doi: 10.1109/TIP.2020.3044219. Epub 2020 Dec 29.
2. HROM: Learning High-Resolution Representation and Object-Aware Masks for Visual Object Tracking.
   Sensors (Basel). 2020 Aug 26;20(17):4807. doi: 10.3390/s20174807.
3. Deep Affinity Network for Multiple Object Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2021 Jan;43(1):104-119. doi: 10.1109/TPAMI.2019.2929520. Epub 2020 Dec 4.
4. On Detection, Data Association and Segmentation for Multi-Target Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2019 Sep;41(9):2146-2160. doi: 10.1109/TPAMI.2018.2849374. Epub 2018 Jun 21.
5. A Hybrid Data Association Framework for Robust Online Multi-Object Tracking.
   IEEE Trans Image Process. 2017 Dec;26(12):5667-5679. doi: 10.1109/TIP.2017.2745103. Epub 2017 Aug 25.
6. Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2018 Mar;40(3):595-610. doi: 10.1109/TPAMI.2017.2691769. Epub 2017 Apr 6.
7. A Biologically Inspired Appearance Model for Robust Visual Tracking.
   IEEE Trans Neural Netw Learn Syst. 2017 Oct;28(10):2357-2370. doi: 10.1109/TNNLS.2016.2586194. Epub 2016 Jul 19.
8. Multi-Target Tracking by Discrete-Continuous Energy Minimization.
   IEEE Trans Pattern Anal Mach Intell. 2016 Oct;38(10):2054-68. doi: 10.1109/TPAMI.2015.2505309. Epub 2015 Dec 3.
9. Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
   IEEE Trans Pattern Anal Mach Intell. 2015 Sep;37(9):1904-16. doi: 10.1109/TPAMI.2015.2389824.
10. Multiple Object Tracking Using K-Shortest Paths Optimization.
   IEEE Trans Pattern Anal Mach Intell. 2011 Sep;33(9):1806-19. doi: 10.1109/TPAMI.2011.21. Epub 2011 Feb 4.