
Gait-based person recognition using arbitrary view transformation model.

Publication information

IEEE Trans Image Process. 2015 Jan;24(1):140-54. doi: 10.1109/TIP.2014.2371335. Epub 2014 Nov 20.

Abstract

Gait recognition is a useful biometric trait for person authentication because it is usable even with low image resolution. One challenge is robustness to a view change (cross-view matching); view transformation models (VTMs) have been proposed to solve this. The VTMs work well if the target views are the same as their discrete training views. However, the gait traits are observed from an arbitrary view in a real situation. Thus, the target views may not coincide with discrete training views, resulting in recognition accuracy degradation. We propose an arbitrary VTM (AVTM) that accurately matches a pair of gait traits from an arbitrary view. To realize an AVTM, we first construct 3D gait volume sequences of training subjects, disjoint from the test subjects in the target scene. We then generate 2D gait silhouette sequences of the training subjects by projecting the 3D gait volume sequences onto the same views as the target views, and train the AVTM with gait features extracted from the 2D sequences. In addition, we extend our AVTM by incorporating a part-dependent view selection scheme (AVTM_PdVS), which divides the gait feature into several parts, and sets part-dependent destination views for transformation. Because appropriate destination views may differ for different body parts, the part-dependent destination view selection can suppress transformation errors, leading to increased recognition accuracy. Experiments using data sets collected in different settings show that the AVTM improves the accuracy of cross-view matching and that the AVTM_PdVS further improves the accuracy in many cases, in particular, verification scenarios.
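The projection step described above (generating 2D gait silhouettes by projecting 3D gait volumes onto the target views) can be sketched as follows. This is a hypothetical, simplified orthographic stand-in, not the paper's implementation: `project_silhouette` rotates a binary voxel volume about its vertical axis by an assumed azimuth angle using nearest-neighbor resampling, then collapses it along the viewing direction; `gait_energy_image` averages per-frame silhouettes into a GEI-style feature of the kind commonly used as VTM input.

```python
import numpy as np

def project_silhouette(volume, azimuth_deg):
    """Orthographically project a binary 3D gait volume (z, y, x) onto the
    image plane of a virtual camera at the given azimuth (rotation about
    the vertical z-axis). Simplified sketch, not the paper's projection."""
    z, y, x = volume.shape
    theta = np.deg2rad(azimuth_deg)
    cy, cx = (y - 1) / 2.0, (x - 1) / 2.0
    yy, xx = np.meshgrid(np.arange(y), np.arange(x), indexing="ij")
    # Inverse-rotate the target grid back into source coordinates,
    # then sample with nearest-neighbor lookup.
    src_y = np.cos(theta) * (yy - cy) + np.sin(theta) * (xx - cx) + cy
    src_x = -np.sin(theta) * (yy - cy) + np.cos(theta) * (xx - cx) + cx
    sy = np.clip(np.rint(src_y).astype(int), 0, y - 1)
    sx = np.clip(np.rint(src_x).astype(int), 0, x - 1)
    rotated = volume[:, sy, sx]            # rotated volume, shape (z, y, x)
    return rotated.max(axis=2)             # collapse along view axis -> (z, y)

def gait_energy_image(volumes, azimuth_deg):
    """Average per-frame silhouettes into a GEI-style gait feature."""
    sils = [project_silhouette(v, azimuth_deg) for v in volumes]
    return np.mean(sils, axis=0)

# Toy example: a 'person' approximated by a vertical slab of voxels.
vol = np.zeros((8, 8, 8))
vol[1:7, 3:5, 2:6] = 1.0
sil0 = project_silhouette(vol, 0.0)    # frontal view
sil90 = project_silhouette(vol, 90.0)  # side view
```

Because the azimuth is a continuous parameter, silhouettes can be rendered at exactly the target view, which is the property that lets an arbitrary-view VTM avoid the discrete-training-view mismatch the abstract describes.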

