
AI-smartphone markerless motion capturing of hip, knee, and ankle joint kinematics during countermovement jumps.

Affiliations

Department of Sport Science, Human Performance Research Centre, University of Konstanz, Konstanz, Germany.

Subsequent GmbH, Konstanz, Germany.

Publication Information

Eur J Sport Sci. 2024 Oct;24(10):1452-1462. doi: 10.1002/ejsc.12186. Epub 2024 Aug 28.

Abstract

Recently, AI-driven skeleton reconstruction tools that use multistage computer vision pipelines have been designed to estimate 3D kinematics from 2D video sequences. In the present study, we validated a novel markerless, smartphone video-based artificial intelligence (AI) motion capture system for hip, knee, and ankle angles during countermovement jumps (CMJs). Eleven participants performed six CMJs. We used 2D videos recorded with a smartphone (Apple iPhone X, 4K, 60 fps) to extract 24 different keypoints, which together built a full skeleton including joints and their connections. Body parts and skeletal keypoints were localized by calculating confidence maps using a multilevel convolutional neural network that integrated both spatial and temporal features. We calculated hip, knee, and ankle angles in the sagittal plane and compared them with the angles measured by a VICON system. We calculated the correlation between both methods' angular progressions, the mean squared error (MSE), the mean absolute error (MAE), and the maximum and minimum angular errors, and ran a statistical parametric mapping (SPM) analysis. Pearson correlation coefficients (r) for hip, knee, and ankle angular progressions in the sagittal plane during the entire movement were 0.96, 0.99, and 0.87, respectively. The SPM group analysis revealed some significant differences only for the ankle angular progression. MSE was below 5.7°, MAE was below 4.5°, and the error for maximum amplitudes was below 3.2°. The smartphone AI motion capture system with the trained multistage computer vision pipeline was able to capture, in particular, hip and knee angles in the sagittal plane during CMJs with high precision from a frontal view only.
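The abstract describes computing sagittal-plane joint angles from detected keypoints and comparing them with VICON reference curves via Pearson r, MSE, MAE, and the maximum-amplitude error. Below is a minimal sketch of such a comparison, assuming per-frame 2D keypoint coordinates are already available; all function and variable names are illustrative and do not reflect the authors' actual implementation.

```python
# Illustrative sketch (not the authors' code): sagittal-plane joint angles
# from 2D keypoints and agreement metrics against a VICON reference.
import numpy as np
from scipy.stats import pearsonr

def joint_angle(proximal, joint, distal):
    """Angle (degrees) at `joint` formed by the proximal and distal keypoints.

    Each argument is an (n_frames, 2) array of 2D keypoint coordinates.
    """
    v1 = proximal - joint
    v2 = distal - joint
    cos = np.sum(v1 * v2, axis=1) / (
        np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1)
    )
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def agreement_metrics(smartphone_deg, vicon_deg):
    """Pearson r, MSE, MAE, and max-amplitude error between two angle curves."""
    r, _ = pearsonr(smartphone_deg, vicon_deg)
    err = smartphone_deg - vicon_deg
    return {
        "pearson_r": float(r),
        "MSE": float(np.mean(err ** 2)),
        "MAE": float(np.mean(np.abs(err))),
        "max_amplitude_error": float(abs(smartphone_deg.max() - vicon_deg.max())),
    }

# Example with synthetic keypoint trajectories (hip-knee-ankle -> knee angle):
n = 120  # e.g. 2 s of a CMJ recorded at 60 fps
t = np.linspace(0, 1, n)
hip = np.stack([np.zeros(n), 1.0 - 0.2 * np.sin(np.pi * t)], axis=1)
knee = np.stack([0.1 * np.sin(np.pi * t), 0.5 * np.ones(n)], axis=1)
ankle = np.stack([np.zeros(n), np.zeros(n)], axis=1)

knee_angle_phone = joint_angle(hip, knee, ankle)
knee_angle_vicon = knee_angle_phone + np.random.normal(0, 2.0, n)  # mock reference
print(agreement_metrics(knee_angle_phone, knee_angle_vicon))
```

The per-frame SPM comparison reported in the study would operate on the full angle-time curves rather than on these summary metrics; the sketch only covers the scalar agreement measures named in the abstract.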


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c12/11451555/a599daba87d8/EJSC-24-1452-g004.jpg
