
Human Motion Enhancement and Restoration via Unconstrained Human Structure Learning.

Affiliations

Graduate School of Information Science and Electrical Engineering, Kyushu University, Fukuoka 819-0395, Japan.

Faculty of Arts and Science, Kyushu University, Fukuoka 819-0395, Japan.

Publication information

Sensors (Basel). 2024 May 14;24(10):3123. doi: 10.3390/s24103123.

Abstract

Human motion capture technology, which leverages sensors to track the movement trajectories of key skeleton points, has been progressively transitioning from industrial applications to broader civilian applications in recent years. It finds extensive use in fields such as game development, digital human modeling, and sport science. However, the affordability of these sensors often comes at the cost of accuracy: low-cost motion capture methods frequently introduce errors into the captured motion data. We introduce a novel approach for human motion reconstruction and enhancement using spatio-temporal attention-based graph convolutional networks (ST-ATGCNs), which efficiently learn the human skeleton structure and the motion logic without requiring prior human kinematic knowledge. This method enables unsupervised motion data restoration and significantly reduces the costs associated with obtaining precise motion capture data. Our experiments, conducted on two extensive motion datasets and with real motion capture sensors such as the SONY (Tokyo, Japan) mocopi, demonstrate the method's effectiveness in enhancing the quality of low-precision motion capture data. The experiments indicate the ST-ATGCN's potential to improve both the accessibility and accuracy of motion capture technology.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e4a6/11125183/ebed2655e80d/sensors-24-03123-g001.jpg
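To make the core idea concrete, here is a minimal NumPy sketch of one spatio-temporal attention graph convolution step of the kind the abstract describes: a spatial graph convolution over a skeleton adjacency matrix followed by temporal self-attention across frames. This is an illustrative toy (the 5-joint chain skeleton, layer sizes, and function names are assumptions), not the authors' ST-ATGCN implementation.

```python
import numpy as np

def normalized_adjacency(edges, n_joints):
    """Symmetrically normalized skeleton adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}, the usual GCN propagation matrix."""
    A = np.eye(n_joints)
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def st_attention_gcn_layer(X, A_hat, W):
    """One toy layer: spatial graph convolution, then temporal
    self-attention over frames.

    X:     (T, J, C)  motion sequence (frames, joints, channels)
    A_hat: (J, J)     normalized skeleton adjacency
    W:     (C, C_out) layer weights
    """
    # Spatial aggregation: each joint pools features from its neighbors.
    H = np.einsum("jk,tkc->tjc", A_hat, X) @ W           # (T, J, C_out)

    # Temporal attention: every frame attends to every other frame,
    # letting clean frames help restore noisy ones.
    Q = H.reshape(H.shape[0], -1)                         # (T, J*C_out)
    scores = Q @ Q.T / np.sqrt(Q.shape[1])                # (T, T)
    scores -= scores.max(axis=1, keepdims=True)           # softmax stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)
    return np.einsum("ts,sjc->tjc", attn, H)              # (T, J, C_out)

# Toy example: 4 frames, a 5-joint chain skeleton, 3-D joint positions.
rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
A_hat = normalized_adjacency(edges, n_joints=5)
X = rng.standard_normal((4, 5, 3))
W = rng.standard_normal((3, 3))
Y = st_attention_gcn_layer(X, A_hat, W)
print(Y.shape)  # (4, 5, 3)
```

In an unsupervised restoration setting like the one the abstract outlines, layers of this form would be trained to reconstruct plausible motion from degraded input, so no ground-truth high-precision capture is needed as a label.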
