

A Deep Learning Model for Markerless Pose Estimation Based on Keypoint Augmentation: What Factors Influence Errors in Biomechanical Applications?

Affiliations

Instituto de Biomecánica-IBV, Universitat Politècnica de València, Edifici 9C, Camí de Vera s/n, 46022 Valencia, Spain.

Instituto Universitario de Automática e Informática Industrial, Universitat Politècnica de València, Edifici 1F, Camí de Vera, s/n, 46022 Valencia, Spain.

Publication

Sensors (Basel). 2024 Mar 17;24(6):1923. doi: 10.3390/s24061923.

DOI: 10.3390/s24061923
PMID: 38544186
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10974619/
Abstract

In biomechanics, movement is typically recorded by tracking the trajectories of anatomical landmarks previously marked using passive instrumentation, which entails several inconveniences. To overcome these disadvantages, researchers are exploring different markerless methods, such as pose estimation networks, to capture movement with equivalent accuracy to marker-based photogrammetry. However, pose estimation models usually only provide joint centers, which are incomplete data for calculating joint angles in all anatomical axes. Recently, marker augmentation models based on deep learning have emerged. These models transform pose estimation data into complete anatomical data. Building on this concept, this study presents three marker augmentation models of varying complexity that were compared to a photogrammetry system. The errors in anatomical landmark positions and the derived joint angles were calculated, and a statistical analysis of the errors was performed to identify the factors that most influence their magnitude. The proposed Transformer model improved upon the errors reported in the literature, yielding position errors of less than 1.5 cm for anatomical landmarks and 4.4 degrees for all seven movements evaluated. Anthropometric data did not influence the errors, while anatomical landmarks and movement influenced position errors, and model, rotation axis, and movement influenced joint angle errors.

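The abstract's two error metrics (landmark position error and joint angle error) can be made concrete with a short sketch. The helper below computes the angle at a middle landmark from three 3D points and the root-mean-square position error between predicted and reference landmarks; the hip/knee/ankle coordinates and the segment choice are hypothetical, and this is an illustrative sketch, not the paper's marker augmentation model.

```python
import math

def joint_angle(a, b, c):
    """Included angle at landmark b (degrees), formed by segments b->a and b->c."""
    ba = [ai - bi for ai, bi in zip(a, b)]
    bc = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(x * y for x, y in zip(ba, bc))
    norms = math.dist(ba, (0.0, 0.0, 0.0)) * math.dist(bc, (0.0, 0.0, 0.0))
    return math.degrees(math.acos(dot / norms))

def rmse(pred, ref):
    """Root-mean-square distance between predicted and reference landmark sets."""
    return math.sqrt(sum(math.dist(p, r) ** 2 for p, r in zip(pred, ref)) / len(pred))

# Hypothetical hip-knee-ankle landmarks (metres) from a markerless model
# versus a marker-based reference; values are made up for illustration.
hip, knee, ankle = (0.0, 0.9, 0.0), (0.0, 0.5, 0.05), (0.0, 0.1, 0.0)
angle = joint_angle(hip, knee, ankle)          # included angle at the knee
err = rmse([hip, knee, ankle],
           [(0.0, 0.9, 0.01), (0.0, 0.5, 0.06), (0.0, 0.1, 0.01)])
```

A reported bound such as "position errors of less than 1.5 cm" would correspond to `err < 0.015` in this sketch; sub-5-degree angle errors would be checked against differences in `angle` between the two systems.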

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/bf61e751f6c7/sensors-24-01923-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/c58e8c848f68/sensors-24-01923-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/3e2e9fac3df9/sensors-24-01923-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/8183f919495e/sensors-24-01923-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/0067c7378c90/sensors-24-01923-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/e077b0c13776/sensors-24-01923-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/7cb5b1f1f4fe/sensors-24-01923-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/2211894709d3/sensors-24-01923-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64b4/10974619/41ed7f6b4787/sensors-24-01923-g009.jpg

Similar Articles

1. A Deep Learning Model for Markerless Pose Estimation Based on Keypoint Augmentation: What Factors Influence Errors in Biomechanical Applications?
Sensors (Basel). 2024 Mar 17;24(6):1923. doi: 10.3390/s24061923.
2. Accuracy of a 3D temporal scanning system for gait analysis: Comparative with a marker-based photogrammetry system.
Gait Posture. 2022 Sep;97:28-34. doi: 10.1016/j.gaitpost.2022.07.001. Epub 2022 Jul 5.
3. Anatomical-Marker-Driven 3D Markerless Human Motion Capture.
IEEE J Biomed Health Inform. 2024 Jul 9;PP. doi: 10.1109/JBHI.2024.3424869.
4. Applications and limitations of current markerless motion capture methods for clinical gait biomechanics.
PeerJ. 2022 Feb 25;10:e12995. doi: 10.7717/peerj.12995. eCollection 2022.
5. Estimating Ground Reaction Forces from Two-Dimensional Pose Data: A Biomechanics-Based Comparison of AlphaPose, BlazePose, and OpenPose.
Sensors (Basel). 2022 Dec 21;23(1):78. doi: 10.3390/s23010078.
6. Assessment of deep learning pose estimates for sports collision tracking.
J Sports Sci. 2022 Sep;40(17):1885-1900. doi: 10.1080/02640414.2022.2117474. Epub 2022 Sep 11.
7. Human movement analysis using stereophotogrammetry. Part 4: assessment of anatomical landmark misplacement and its effects on joint kinematics.
Gait Posture. 2005 Feb;21(2):226-37. doi: 10.1016/j.gaitpost.2004.05.003.
8. Positioning errors of anatomical landmarks identified by fixed vertices in homologous meshes.
Gait Posture. 2024 Feb;108:215-221. doi: 10.1016/j.gaitpost.2023.11.024. Epub 2023 Dec 1.
9. Exercise quantification from single camera view markerless 3D pose estimation.
Heliyon. 2024 Mar 12;10(6):e27596. doi: 10.1016/j.heliyon.2024.e27596. eCollection 2024 Mar 30.
10. Measurement errors in roentgen-stereophotogrammetric joint-motion analysis.
J Biomech. 1990;23(3):259-69. doi: 10.1016/0021-9290(90)90016-v.

Cited By

1. The influence of the marker set on inverse kinematics results to inform markerless motion capture annotations.
Sci Rep. 2025 Apr 25;15(1):14547. doi: 10.1038/s41598-025-97219-5.
2. Marker Data Enhancement for Markerless Motion Capture.
IEEE Trans Biomed Eng. 2025 Jun;72(6):2013-2022. doi: 10.1109/TBME.2025.3530848.

References

1. Multimodal human motion dataset of 3D anatomical landmarks and pose keypoints.
Data Brief. 2024 Feb 6;53:110157. doi: 10.1016/j.dib.2024.110157. eCollection 2024 Apr.
2. Comparison of Concurrent and Asynchronous Running Kinematics and Kinetics From Marker-Based and Markerless Motion Capture Under Varying Clothing Conditions.
J Appl Biomech. 2024 Jan 18;40(2):129-137. doi: 10.1123/jab.2023-0069. Print 2024 Apr 1.
3. Positioning errors of anatomical landmarks identified by fixed vertices in homologous meshes.
Gait Posture. 2024 Feb;108:215-221. doi: 10.1016/j.gaitpost.2023.11.024. Epub 2023 Dec 1. Preprint: bioRxiv. 2024 Jul 17:2024.07.13.603382. doi: 10.1101/2024.07.13.603382.
4. OpenCap: Human movement dynamics from smartphone videos.
PLoS Comput Biol. 2023 Oct 19;19(10):e1011462. doi: 10.1371/journal.pcbi.1011462. eCollection 2023 Oct.
5. Concurrent validity of smartphone-based markerless motion capturing to quantify lower-limb joint kinematics in healthy and pathological gait.
J Biomech. 2023 Oct;159:111801. doi: 10.1016/j.jbiomech.2023.111801. Epub 2023 Sep 17.
6. A comparison of three-dimensional kinematics between markerless and marker-based motion capture in overground gait.
J Biomech. 2023 Oct;159:111793. doi: 10.1016/j.jbiomech.2023.111793. Epub 2023 Sep 7.
7. Markerless motion capture estimates of lower extremity kinematics and kinetics are comparable to marker-based across 8 movements.
J Biomech. 2023 Aug;157:111751. doi: 10.1016/j.jbiomech.2023.111751. Epub 2023 Aug 4.
8. Comparison of kinematics between Theia markerless and conventional marker-based gait analysis in clinical patients.
Gait Posture. 2023 Jul;104:9-14. doi: 10.1016/j.gaitpost.2023.05.029. Epub 2023 Jun 1.
9. The development and evaluation of a fully automated markerless motion capture workflow.
J Biomech. 2022 Nov;144:111338. doi: 10.1016/j.jbiomech.2022.111338. Epub 2022 Oct 2.
10. Accuracy of a 3D temporal scanning system for gait analysis: Comparative with a marker-based photogrammetry system.
Gait Posture. 2022 Sep;97:28-34. doi: 10.1016/j.gaitpost.2022.07.001. Epub 2022 Jul 5.