
Learning a Tracking and Estimation Integrated Graphical Model for Human Pose Tracking.

Publication Information

IEEE Trans Neural Netw Learn Syst. 2015 Dec;26(12):3176-86. doi: 10.1109/TNNLS.2015.2411287. Epub 2015 Mar 27.

Abstract

We investigate the tracking of 2-D human poses in a video stream to determine the spatial configuration of body parts in each frame. This is not a trivial task, because people may wear different kinds of clothing and may move very quickly and unpredictably. Pose estimation techniques are typically applied, but they ignore the temporal context and cannot provide smooth, reliable tracking results. Therefore, we develop a tracking and estimation integrated model (TEIM) to fully exploit temporal information by integrating pose estimation with visual tracking. However, joint parsing of multiple articulated parts over time is difficult, because a full model with edges capturing all pairwise relationships within and between frames is loopy and intractable. Previous models usually resorted to approximate inference, which cannot guarantee good results and incurs a high computational cost. We overcome these problems by exploring the idea of divide and conquer, which decomposes the full model into two much simpler, tractable submodels. In addition, a novel two-step iteration strategy is proposed to efficiently solve the joint parsing problem. Algorithmically, TEIM is carefully designed so that: 1) pose estimation and visual tracking compensate for each other to achieve desirable tracking results; 2) it is able to deal with the problem of tracking loss; and 3) it needs only past information and is capable of tracking online. Experiments are conducted on two public in-the-wild data sets with ground-truth layout annotations, and the results demonstrate the effectiveness of the proposed TEIM framework.
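
The abstract describes the method only at a high level, so the following toy Python snippet is merely one way to illustrate the alternation it mentions: a tractable per-frame pose-estimation submodel and a tractable temporal tracking submodel are solved in turn for a few iterations per frame, instead of running approximate inference on the full loopy spatio-temporal model. Every concrete quantity in the snippet (the candidate locations, the distance-based temporal prior, the numbers of parts and iterations) is an invented placeholder, not the authors' actual TEIM formulation.

```python
# Toy sketch of the alternating "two-step iteration" idea: a per-frame
# pose-estimation submodel and a temporal tracking submodel are solved in
# turn instead of running approximate inference on the full loopy model.
# All concrete choices below are invented placeholders.

import numpy as np

N_PARTS = 4        # hypothetical number of body parts
N_CANDIDATES = 5   # hypothetical candidate locations per part per frame
N_ITERS = 2        # hypothetical number of two-step iterations per frame

rng = np.random.default_rng(0)


def spatial_step(unary, temporal_prior):
    """Pose-estimation submodel: choose each part's candidate from the
    per-frame appearance score plus the temporal prior supplied by tracking.
    (A real spatial model would also include tree-structured pairwise terms.)"""
    return (unary + temporal_prior).argmax(axis=1)


def temporal_step(prev_pose, current_pose, candidates):
    """Tracking submodel (placeholder): favour candidates consistent with the
    previous frame's pose and the current spatial estimate, producing the
    temporal prior used by the next spatial step."""
    anchor = 0.5 * (prev_pose + current_pose)                  # crude motion model
    dist = np.linalg.norm(candidates - anchor[:, None, :], axis=2)
    return -dist                                               # higher = closer


def track_frame(unary, candidates, prev_pose):
    """Process one frame online, alternating the two tractable submodels."""
    prior = temporal_step(prev_pose, prev_pose, candidates)    # init from last frame
    for _ in range(N_ITERS):
        idx = spatial_step(unary, prior)                       # step 1: estimation
        current = candidates[np.arange(N_PARTS), idx]
        prior = temporal_step(prev_pose, current, candidates)  # step 2: tracking
    return current


# Online loop over a short synthetic "video": only past frames are used.
pose = rng.uniform(0.0, 100.0, size=(N_PARTS, 2))
for t in range(3):
    unary = rng.uniform(size=(N_PARTS, N_CANDIDATES))          # fake detector scores
    candidates = pose[:, None, :] + rng.normal(0.0, 5.0, (N_PARTS, N_CANDIDATES, 2))
    pose = track_frame(unary, candidates, pose)
    print(f"frame {t}: estimated part locations\n{pose.round(1)}")
```

The two submodels here are deliberately trivial; the point is only the control flow in track_frame, which mirrors the two-step iteration described above and uses nothing beyond past frames, consistent with the abstract's online-tracking claim.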
