Suppr 超能文献



Multiple Human Association and Tracking From Egocentric and Complementary Top Views.

Publication Info

IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5225-5242. doi: 10.1109/TPAMI.2021.3070562. Epub 2022 Aug 4.

DOI: 10.1109/TPAMI.2021.3070562
PMID: 33798068
Abstract

Crowded-scene surveillance can benefit significantly from combining an egocentric-view camera with a complementary top-view camera. A typical setting pairs an egocentric-view camera, e.g., a wearable camera on the ground that captures rich local details, with a top-view camera, e.g., a drone-mounted camera at high altitude that provides a global picture of the scene. To collaboratively analyze such complementary-view videos, an important task is to associate and track multiple people across views and over time. This task is challenging and differs from classical human tracking: we must not only track multiple subjects in each video, but also identify the same subjects across the two complementary views. This paper formulates it as a constrained mixed integer programming problem, in which a major challenge is how to effectively measure subject similarity over time in each video and across the two views. Although appearance and motion consistencies apply well to over-time association, they are poorly suited to connecting two highly different complementary views. To this end, we present a spatial-distribution-based approach to reliable cross-view subject association. We also build a dataset to benchmark this new, challenging task. Extensive experiments verify the effectiveness of our method.
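The paper's full formulation is a constrained mixed integer program solved jointly over time and across views, which the abstract does not spell out in detail. As a simplified, hypothetical illustration of just the cross-view subproblem, a one-to-one association between subjects in the two views, given a precomputed dissimilarity matrix (e.g., derived from spatial-distribution features), can be posed as a linear assignment problem. The function name and cost values below are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_cross_view(cost):
    """Return a minimum-cost one-to-one matching between the two views.

    cost[i, j] is a dissimilarity score between subject i in the
    egocentric view and subject j in the top view. The Hungarian
    algorithm finds the assignment minimizing the total cost.
    """
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

# Toy example: 3 subjects per view; low diagonal costs mean subject i
# in the egocentric view best matches subject i in the top view.
cost = np.array([
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
    [0.8, 0.7, 0.1],
])
matches = associate_cross_view(cost)
# matches -> [(0, 0), (1, 1), (2, 2)]
```

Note that the paper's actual method additionally enforces temporal consistency within each video, so the real optimization couples many such per-frame assignments rather than solving them independently.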


Similar Articles

1. Multiple Human Association and Tracking From Egocentric and Complementary Top Views.
   IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):5225-5242. doi: 10.1109/TPAMI.2021.3070562. Epub 2022 Aug 4.
2. Unveiling the Power of Self-Supervision for Multi-View Multi-Human Association and Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2025 Jan;47(1):351-368. doi: 10.1109/TPAMI.2024.3463966. Epub 2024 Dec 4.
3. Spatio-Temporal Matching for Human Pose Estimation in Video.
   IEEE Trans Pattern Anal Mach Intell. 2016 Aug;38(8):1492-504. doi: 10.1109/TPAMI.2016.2526002. Epub 2016 Feb 4.
4. Egocentric Meets Top-View.
   IEEE Trans Pattern Anal Mach Intell. 2019 Jun;41(6):1353-1366. doi: 10.1109/TPAMI.2018.2832121. Epub 2018 May 1.
5. Principal Axis-Based Correspondence Between Multiple Cameras for People Tracking.
   IEEE Trans Pattern Anal Mach Intell. 2006 Apr;28(4):663-71. doi: 10.1109/TPAMI.2006.80.
6. A Multi-Modal Egocentric Activity Recognition Approach towards Video Domain Generalization.
   Sensors (Basel). 2024 Apr 12;24(8):2491. doi: 10.3390/s24082491.
7. Generating Personalized Summaries of Day Long Egocentric Videos.
   IEEE Trans Pattern Anal Mach Intell. 2023 Jun;45(6):6832-6845. doi: 10.1109/TPAMI.2021.3118077. Epub 2023 May 5.
8. Desktop Action Recognition From First-Person Point-of-View.
   IEEE Trans Cybern. 2019 May;49(5):1616-1628. doi: 10.1109/TCYB.2018.2806381. Epub 2018 Feb 27.
9. Distributed Multi-Camera Multi-Target Association for Real-Time Tracking.
   Sci Rep. 2022 Jun 30;12(1):11052. doi: 10.1038/s41598-022-15000-4.
10. Cross-View Person Identification Based on Confidence-Weighted Human Pose Matching.
   IEEE Trans Image Process. 2019 Aug;28(8):3821-3835. doi: 10.1109/TIP.2019.2899782. Epub 2019 Feb 15.