
Effective Multi-Object Tracking via Global Object Models and Object Constraint Learning.

Affiliation

Vision and Learning Laboratory, Department of Computer Engineering, Inha University, Incheon 22212, Korea.

Publication information

Sensors (Basel). 2022 Oct 18;22(20):7943. doi: 10.3390/s22207943.

DOI: 10.3390/s22207943
PMID: 36298293
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9609386/
Abstract

Effective multi-object tracking is still challenging due to the trade-off between tracking accuracy and speed. Because recent multi-object tracking (MOT) methods leverage object appearance and motion models to associate detections between consecutive frames, the key to effective multi-object tracking is reducing the computational complexity of learning both models. To this end, this work proposes global appearance and motion models that discriminate multiple objects, instead of learning local object-specific models. Concretely, it learns a global appearance model using contrastive learning between object appearances. In addition, we learn a global relation motion model using relative motion learning between objects. Moreover, this paper proposes object constraint learning to improve tracking efficiency. This study treats the discriminability of the models as a constraint, and learns both models only when the constraint is violated. Object constraint learning therefore differs from conventional online learning for multi-object tracking, which updates learnable parameters every frame. This work incorporates the global models and object constraint learning into a confidence-based association method, and compares our tracker with state-of-the-art methods on the publicly available MOT Challenge datasets. As a result, we achieve 64.5% MOTA (multi-object tracking accuracy) and a 6.54 Hz tracking speed on the MOT16 test dataset. The comparison results show that our methods can improve tracking accuracy and tracking speed together.
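The constraint-triggered update idea in the abstract can be sketched as a minimal, self-contained toy. Everything here is illustrative and not the paper's implementation: the 1-D "appearance features", the nearest-template matching rule, and the template-refresh step stand in for the global appearance/motion models and their learning procedure. The point the sketch shows is the control flow: the global model is updated only on frames where its discriminability constraint fails, rather than every frame.

```python
def discriminable(templates, detections):
    """Discriminability constraint (toy version): every detection
    (obj_id, feature) must be nearest to its own object's template."""
    for obj_id, feat in detections:
        nearest = min(templates, key=lambda t: abs(templates[t] - feat))
        if nearest != obj_id:
            return False
    return True

def track(frames):
    """frames: list of frames, each a list of (obj_id, feature) pairs.
    Returns the final global templates and how many updates were triggered."""
    # Initialize the global model from the first frame.
    templates = {obj_id: feat for obj_id, feat in frames[0]}
    updates = 0
    for dets in frames[1:]:
        if not discriminable(templates, dets):
            # Constraint violated: refresh the global model. (Toy update:
            # replace each template with the latest feature; the paper
            # instead re-learns its appearance/motion models here.)
            for obj_id, feat in dets:
                templates[obj_id] = feat
            updates += 1
    return templates, updates

frames = [[("a", 0.0), ("b", 1.0)],    # frame 1: initialize templates
          [("a", 0.1), ("b", 0.9)],    # small drift: constraint still holds
          [("a", 0.6), ("b", 0.9)]]    # "a" drifts toward "b": update fires
templates, updates = track(frames)
```

With per-frame online learning the model would be updated on every one of these frames; under the constraint-triggered scheme only the last frame causes an update, which is the source of the efficiency gain the abstract claims.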


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/37f11ffa84fa/sensors-22-07943-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/3e2eb1158817/sensors-22-07943-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/2f1b1f45fd72/sensors-22-07943-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/279af1c3dbde/sensors-22-07943-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/9db163d46076/sensors-22-07943-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/6f9a671ccc89/sensors-22-07943-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/cbcd10c0acb3/sensors-22-07943-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/3db510cc2067/sensors-22-07943-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/7b657c96bbf5/sensors-22-07943-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a6e2/9609386/6288ead7f1c6/sensors-22-07943-g010.jpg

Similar articles

1
Effective Multi-Object Tracking via Global Object Models and Object Constraint Learning.
Sensors (Basel). 2022 Oct 18;22(20):7943. doi: 10.3390/s22207943.
2
Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.
IEEE Trans Pattern Anal Mach Intell. 2018 Mar;40(3):595-610. doi: 10.1109/TPAMI.2017.2691769. Epub 2017 Apr 6.
3
MotionTrack: Learning motion predictor for multiple object tracking.
Neural Netw. 2024 Nov;179:106539. doi: 10.1016/j.neunet.2024.106539. Epub 2024 Jul 17.
4
Multi-appearance segmentation and extended 0-1 programming for dense small object tracking.
PLoS One. 2018 Oct 31;13(10):e0206168. doi: 10.1371/journal.pone.0206168. eCollection 2018.
5
MBT3D: Deep learning based multi-object tracker for bumblebee 3D flight path estimation.
PLoS One. 2023 Sep 22;18(9):e0291415. doi: 10.1371/journal.pone.0291415. eCollection 2023.
6
Pixel-Guided Association for Multi-Object Tracking.
Sensors (Basel). 2022 Nov 18;22(22):8922. doi: 10.3390/s22228922.
7
Efficient Single-Shot Multi-Object Tracking for Vehicles in Traffic Scenarios.
Sensors (Basel). 2021 Sep 23;21(19):6358. doi: 10.3390/s21196358.
8
Deep Affinity Network for Multiple Object Tracking.
IEEE Trans Pattern Anal Mach Intell. 2021 Jan;43(1):104-119. doi: 10.1109/TPAMI.2019.2929520. Epub 2020 Dec 4.
9
Tracking Beyond Detection: Learning a Global Response Map for End-to-End Multi-Object Tracking.
IEEE Trans Image Process. 2021;30:8222-8235. doi: 10.1109/TIP.2021.3113169. Epub 2021 Sep 30.
10
Multiple Traffic Target Tracking with Spatial-Temporal Affinity Network.
Comput Intell Neurosci. 2022 May 23;2022:9693767. doi: 10.1155/2022/9693767. eCollection 2022.

Cited by

1
Multi-Target Tracking Based on a Combined Attention Mechanism and Occlusion Sensing in a Behavior-Analysis System.
Sensors (Basel). 2023 Mar 8;23(6):2956. doi: 10.3390/s23062956.
2
Achieving Adaptive Visual Multi-Object Tracking with Unscented Kalman Filter.
Sensors (Basel). 2022 Nov 23;22(23):9106. doi: 10.3390/s22239106.

References

1
Identity-Quantity Harmonic Multi-Object Tracking.
IEEE Trans Image Process. 2022;31:2201-2215. doi: 10.1109/TIP.2022.3154286. Epub 2022 Mar 8.
2
Multi-Object Tracking with Correlation Filter for Autonomous Vehicle.
Sensors (Basel). 2018 Jun 22;18(7):2004. doi: 10.3390/s18072004.
3
Greedy Batch-Based Minimum-Cost Flows for Tracking Multiple Objects.
IEEE Trans Image Process. 2017 Oct;26(10):4765-4776. doi: 10.1109/TIP.2017.2723239. Epub 2017 Jul 4.
4
Confidence-Based Data Association and Discriminative Deep Appearance Learning for Robust Online Multi-Object Tracking.
IEEE Trans Pattern Anal Mach Intell. 2018 Mar;40(3):595-610. doi: 10.1109/TPAMI.2017.2691769. Epub 2017 Apr 6.
5
Tracklet Association by Online Target-Specific Metric Learning and Coherent Dynamics Estimation.
IEEE Trans Pattern Anal Mach Intell. 2017 Mar;39(3):589-602. doi: 10.1109/TPAMI.2016.2551245. Epub 2016 Apr 6.
6
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.
IEEE Trans Pattern Anal Mach Intell. 2017 Jun;39(6):1137-1149. doi: 10.1109/TPAMI.2016.2577031. Epub 2016 Jun 6.
7
Object detection with discriminatively trained part-based models.
IEEE Trans Pattern Anal Mach Intell. 2010 Sep;32(9):1627-45. doi: 10.1109/TPAMI.2009.167.
8
Long short-term memory.
Neural Comput. 1997 Nov 15;9(8):1735-80. doi: 10.1162/neco.1997.9.8.1735.