
Multi-Biometric Feature Extraction from Multiple Pose Estimation Algorithms for Cross-View Gait Recognition.

Author Information

Ray Ausrukona, Uddin Md Zasim, Hasan Kamrul, Melody Zinat Rahman, Sarker Prodip Kumar, Ahad Md Atiqur Rahman

Affiliations

Department of Computer Science and Engineering, Begum Rokeya University, Rangpur 5404, Bangladesh.

Department of Electrical and Electronic Engineering, Begum Rokeya University, Rangpur 5404, Bangladesh.

Publication Information

Sensors (Basel). 2024 Nov 30;24(23):7669. doi: 10.3390/s24237669.

Abstract

Gait recognition is a behavioral biometric technique that identifies individuals based on their unique walking patterns, enabling long-distance identification. Traditional gait recognition methods rely on appearance-based approaches that utilize background-subtracted silhouette sequences to extract gait features. While effective and easy to compute, these methods are susceptible to variations in clothing, carried objects, and illumination changes, compromising the extraction of discriminative features in real-world applications. In contrast, model-based approaches using skeletal key points offer robustness against these covariates. Advances in human pose estimation (HPE) algorithms using convolutional neural networks (CNNs) have facilitated the extraction of skeletal key points, addressing some challenges of model-based approaches. However, the performance of skeleton-based methods still lags behind that of appearance-based approaches. This paper aims to bridge this performance gap by introducing a multi-biometric framework that extracts features from multiple HPE algorithms for gait recognition, employing feature-level fusion (FLF) and decision-level fusion (DLF) by leveraging a single-source multi-sample technique. We utilized state-of-the-art HPE algorithms, OpenPose, AlphaPose, and HRNet, to generate diverse skeleton data samples from a single source video. Subsequently, we employed a residual graph convolutional network (ResGCN) to extract features from the generated skeleton data. In the FLF approach, the features extracted by ResGCN from the skeleton data samples generated by multiple HPE algorithms are aggregated point-wise for gait recognition, while in the DLF approach, the decisions of ResGCN applied to each skeleton data sample are integrated using majority voting for the final recognition. Our proposed method demonstrated state-of-the-art skeleton-based cross-view gait recognition performance on a popular dataset, CASIA-B.
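The two fusion strategies described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: extract_features and classify are hypothetical placeholders standing in for a trained ResGCN backbone and classifier head, and mean aggregation is used here as one possible form of point-wise fusion; only the overall FLF and DLF structure follows the description above.

```python
import numpy as np
from collections import Counter

# Hypothetical placeholders for a trained ResGCN feature extractor and
# classifier head; shapes and logic are illustrative only.
def extract_features(skeleton_sequence: np.ndarray) -> np.ndarray:
    """Return a gait embedding for one skeleton sequence (stand-in for ResGCN)."""
    return skeleton_sequence.mean(axis=(0, 1))

def classify(embedding: np.ndarray) -> int:
    """Return a predicted subject ID for one embedding (stand-in for a classifier)."""
    return int(np.argmax(embedding))

def feature_level_fusion(skeleton_samples: list) -> int:
    """FLF: aggregate the per-HPE embeddings point-wise, then classify once."""
    embeddings = [extract_features(s) for s in skeleton_samples]
    fused = np.mean(embeddings, axis=0)  # element-wise aggregation across HPE samples
    return classify(fused)

def decision_level_fusion(skeleton_samples: list) -> int:
    """DLF: classify each HPE-specific sample, then take a majority vote."""
    votes = [classify(extract_features(s)) for s in skeleton_samples]
    return Counter(votes).most_common(1)[0][0]

# Three skeleton sequences (frames x joints x coords) from one source video,
# one per pose estimator (OpenPose, AlphaPose, HRNet); values are synthetic.
samples = [np.random.rand(60, 17, 2) for _ in range(3)]
print(feature_level_fusion(samples), decision_level_fusion(samples))
```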


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b3a1/11645053/23a97651beb9/sensors-24-07669-g001.jpg
