
GaitSet: Cross-View Gait Recognition Through Utilizing Gait As a Deep Set.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2022 Jul;44(7):3467-3478. doi: 10.1109/TPAMI.2021.3057879. Epub 2022 Jun 3.

Abstract

Gait is a unique biometric feature that can be recognized at a distance; thus, it has broad applications in crime prevention, forensic identification, and social security. To portray a gait, existing gait recognition methods utilize either a gait template, which makes it difficult to preserve temporal information, or a gait sequence, which maintains unnecessary sequential constraints and thus loses the flexibility of gait recognition. In this paper, we present a novel perspective that utilizes gait as a deep set: a set of gait frames is integrated by a global-local fused deep network, inspired by the way our left and right hemispheres process information, to learn features that can be used for identification. Based on this deep set perspective, our method is immune to frame permutations and can naturally integrate frames from different videos acquired under different scenarios, such as diverse viewing angles, different clothes, or different item-carrying conditions. Experiments show that under normal walking conditions, our single-model method achieves an average rank-1 accuracy of 96.1 percent on the CASIA-B gait dataset and an accuracy of 87.9 percent on the OU-MVLP gait dataset. Under various complex scenarios, our model also exhibits a high level of robustness: it achieves accuracies of 90.8 and 70.3 percent on CASIA-B under bag-carrying and coat-wearing walking conditions, respectively, significantly outperforming the best existing methods. Moreover, the proposed method maintains a satisfactory accuracy even when only small numbers of frames are available in the test samples; for example, it achieves 85.0 percent on CASIA-B even when using only 7 frames. The source code has been released at https://github.com/AbnerHqC/GaitSet.
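The key property behind the "deep set" perspective is that the aggregation over frames is permutation invariant, so the embedding of a silhouette set does not depend on frame order or on which video each frame came from. The sketch below illustrates this set-pooling property in PyTorch; it is a minimal toy model under assumed input shapes, and the SetGaitEncoder class, its layer sizes, and the max-over-frames pooling choice are illustrative stand-ins, not the authors' GaitSet architecture (which is available in the linked repository).

```python
import torch
import torch.nn as nn

class SetGaitEncoder(nn.Module):
    """Toy permutation-invariant encoder over a set of silhouette frames.

    Hypothetical illustration of set pooling; not the GaitSet implementation.
    """
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Small frame-level CNN over 1-channel silhouettes (illustrative only).
        self.frame_cnn = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (N, feat_dim, 1, 1)
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, num_frames, 1, H, W), treated as an unordered set.
        b, n, c, h, w = frames.shape
        feats = self.frame_cnn(frames.reshape(b * n, c, h, w)).reshape(b, n, -1)
        # Set pooling: max over the frame axis is invariant to frame order.
        return feats.max(dim=1).values  # (batch, feat_dim)

encoder = SetGaitEncoder().eval()
clip = torch.rand(2, 7, 1, 64, 64)        # e.g. only 7 frames per sample
shuffled = clip[:, torch.randperm(7)]     # reorder the frame set
with torch.no_grad():
    # Same embedding regardless of frame order.
    assert torch.allclose(encoder(clip), encoder(shuffled))
```

Because each frame is encoded independently and the max reduction ignores order, the assertion holds exactly: shuffling the set, or pooling frames gathered from different clips of the same subject, yields the same embedding. This is what allows the method to mix frames across viewing angles and to degrade gracefully when only a few frames are available.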

