

Graph Convolutional Networks for multi-modal robotic martial arts leg pose recognition.

Authors

Yao Shun, Ping Yihan, Yue Xiaoyu, Chen He

Affiliations

Department of Public Instruction, ChangJiang Polytechnic of Art and Engineering, Jingzhou, China.

School of Computer Science, Northwestern University, Evanston, IL, United States.

Publication

Front Neurorobot. 2025 Jan 20;18:1520983. doi: 10.3389/fnbot.2024.1520983. eCollection 2024.

Abstract

INTRODUCTION

Accurate recognition of martial arts leg poses is essential for applications in sports analytics, rehabilitation, and human-computer interaction. Traditional pose recognition models, relying on sequential or convolutional approaches, often struggle to capture the complex spatial-temporal dependencies inherent in martial arts movements. These methods lack the ability to effectively model the nuanced dynamics of joint interactions and temporal progression, leading to limited generalization in recognizing complex actions.

METHODS

To address these challenges, we propose PoseGCN, a Graph Convolutional Network (GCN)-based model that integrates spatial, temporal, and contextual features through a novel framework. PoseGCN leverages spatial-temporal graph encoding to capture joint motion dynamics, an action-specific attention mechanism to assign importance to relevant joints depending on the action context, and a self-supervised pretext task to enhance temporal robustness and continuity. Experimental results on four benchmark datasets (Kinetics-700, Human3.6M, NTU RGB+D, and UTD-MHAD) demonstrate that PoseGCN outperforms existing models, achieving state-of-the-art accuracy and F1 scores.
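The abstract does not include code, but the two core ingredients it names, spatial-temporal graph convolution over the skeleton and action-specific joint attention, can be illustrated with a minimal NumPy sketch. Everything below is an illustrative assumption, not the authors' implementation: the function names, the toy 5-joint leg chain, the depthwise temporal kernel, and the mean-activation attention heuristic are all placeholders for the real learned components.

```python
import numpy as np

def normalized_adjacency(edges, num_joints):
    """Symmetrically normalized skeleton graph: D^{-1/2} (A + I) D^{-1/2}."""
    A = np.eye(num_joints)                      # self-loops keep each joint's own features
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def st_gcn_layer(X, A_hat, W_spatial, temporal_kernel):
    """One spatial-temporal block: graph conv per frame, then a 1-D
    temporal conv per joint. X has shape (T frames, J joints, C channels)."""
    T = X.shape[0]
    # spatial step: aggregate over neighboring joints, then project features
    H = np.einsum('jk,tkc->tjc', A_hat, X) @ W_spatial
    H = np.maximum(H, 0.0)                      # ReLU nonlinearity
    # temporal step: depthwise convolution along time with 'same' padding
    k = len(temporal_kernel)
    Hp = np.pad(H, ((k // 2, k // 2), (0, 0), (0, 0)))
    return sum(temporal_kernel[i] * Hp[i:i + T] for i in range(k))

def joint_attention(H):
    """Toy stand-in for action-specific attention: softmax over each
    joint's mean activation, used to reweight joint features."""
    scores = H.mean(axis=(0, 2))                # one score per joint
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return H * w[None, :, None], w

# usage on a toy leg chain: hip -> knee -> ankle -> heel -> toe
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
A_hat = normalized_adjacency(edges, num_joints=5)
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5, 3))              # 8 frames, 5 joints, 3-D coords
W = rng.standard_normal((3, 4))                 # project 3 channels to 4
H = st_gcn_layer(X, A_hat, W, np.array([0.25, 0.5, 0.25]))
H_att, weights = joint_attention(H)
```

In the real model the attention weights would be produced by a learned, action-conditioned module rather than this activation heuristic, and the self-supervised pretext task (for example, predicting frame order or masked joints) would supply an auxiliary training signal for temporal continuity; neither is shown here.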

RESULTS AND DISCUSSION

These findings highlight the model's capacity to generalize across diverse datasets and capture fine-grained pose details, showcasing its potential in advancing complex pose recognition tasks. The proposed framework offers a robust solution for precise action recognition and paves the way for future developments in multi-modal pose analysis.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7928/11792168/e727c7da4759/fnbot-18-1520983-g0001.jpg
